Results 1-12 of 12
1.
Sci Rep; 13(1): 16043, 2023 Sep 25.
Article in English | MEDLINE | ID: mdl-37749176

ABSTRACT

This study aimed to evaluate the use of novel optomyography (OMG)-based smart glasses, OCOsense, for the monitoring and recognition of facial expressions. Experiments were conducted on data gathered from 27 young adult participants, who performed facial expressions varying in intensity, duration, and head movement. The facial expressions included smiling, frowning, raising the eyebrows, and squeezing the eyes. The statistical analysis demonstrated that: (i) OCO sensors based on the principles of OMG can capture distinct variations in cheek and brow movements with a high degree of accuracy and specificity; (ii) head movement does not have a significant impact on how well these facial expressions are detected. The collected data were also used to train a machine learning model to recognise the four facial expressions and when the face enters a neutral state. We evaluated this model in conditions intended to simulate real-world use, including variations in expression intensity, head movement, and glasses position relative to the face. The model achieved an overall accuracy of 93% (0.90 F1-score), evaluated using a leave-one-subject-out cross-validation technique.
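The leave-one-subject-out evaluation mentioned above can be sketched in a few lines: every fold holds out all data from one subject, so the model is always tested on an unseen person. The classifier below is a trivial majority-class stand-in (the study's actual model is not public here); only the split logic is the point.

```python
# Sketch of leave-one-subject-out (LOSO) cross-validation.
# The "model" is a hypothetical majority-class baseline, not the paper's.
from collections import Counter

def loso_splits(samples):
    """Yield (held_out, train, test) per fold.

    `samples` is a list of (subject_id, features, label) tuples."""
    subjects = sorted({s for s, _, _ in samples})
    for held_out in subjects:
        train = [x for x in samples if x[0] != held_out]
        test = [x for x in samples if x[0] == held_out]
        yield held_out, train, test

def majority_class_accuracy(samples):
    """Evaluate a trivial majority-class model under LOSO."""
    correct = total = 0
    for _, train, test in loso_splits(samples):
        # "Train": pick the most common label among the remaining subjects.
        majority = Counter(lbl for _, _, lbl in train).most_common(1)[0][0]
        correct += sum(1 for _, _, lbl in test if lbl == majority)
        total += len(test)
    return correct / total

# Toy data: three subjects, two expression samples each.
data = [
    ("s1", [0.1], "smile"), ("s1", [0.2], "frown"),
    ("s2", [0.3], "smile"), ("s2", [0.4], "smile"),
    ("s3", [0.5], "smile"), ("s3", [0.6], "neutral"),
]
print(round(majority_class_accuracy(data), 3))  # 0.667
```

Because no subject appears in both train and test, the score reflects cross-person generalization rather than memorization of a person's face.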


Subjects
Facial Recognition, Smart Glasses, Young Adult, Humans, Facial Expression, Smile, Movement, Emotions
2.
Front Psychiatry; 14: 1232433, 2023.
Article in English | MEDLINE | ID: mdl-37614653

ABSTRACT

Background: Continuous assessment of affective behaviors could improve the diagnosis, assessment, and monitoring of chronic mental health and neurological conditions such as depression. However, no existing technologies are well suited to this, limiting potential clinical applications. Aim: To test whether we could replicate previous evidence of hypo-reactivity to emotionally salient material using an entirely new sensing technique called optomyography, which is well suited to remote monitoring. Methods: Participants were 38 volunteers (aged 18-40 years) who met a research diagnosis of depression and 37 age-matched non-depressed controls. Changes in facial muscle activity over the brow (corrugator supercilii) and cheek (zygomaticus major) were measured whilst volunteers watched videos varying in emotional salience. Results: Across all participants, videos rated as subjectively positive were associated with activation of muscles in the cheek relative to videos rated as neutral or negative. Videos rated as subjectively negative were associated with brow activation relative to videos judged as neutral or positive. Self-reported arousal was associated with a stepwise increase in facial muscle activation across the brow and cheek. As for group differences, facial muscle activation during videos considered subjectively negative or rated as high arousal was significantly reduced in depressed volunteers compared with controls. Conclusion: We demonstrate for the first time that it is possible to detect facial expression hypo-reactivity in adults with depression in response to emotional content using glasses-based optomyography sensing. We hope these results will encourage the use of optomyography-based sensing to track facial expressions in the real world, outside of a specialized testing environment.

3.
Sensors (Basel); 23(10), 2023 May 16.
Article in English | MEDLINE | ID: mdl-37430716

ABSTRACT

The estimation of human mobility patterns is essential for many components of developed societies, including the planning and management of urbanization, pollution, and disease spread. One important type of mobility estimator is the next-place predictor, which uses previous mobility observations to anticipate an individual's subsequent location. So far, such predictors have not yet made use of the latest advancements in artificial intelligence, such as General Purpose Transformers (GPT) and Graph Convolutional Networks (GCNs), which have already achieved outstanding results in image analysis and natural language processing. This study explores the use of GPT- and GCN-based models for next-place prediction. We developed the models based on more general time-series forecasting architectures and evaluated them using two sparse datasets (based on check-ins) and one dense dataset (based on continuous GPS data). The experiments showed that GPT-based models slightly outperformed the GCN-based models, with a difference in accuracy of 1.0 to 3.2 percentage points (p.p.). Furthermore, Flashback-LSTM, a state-of-the-art model specifically designed for next-place prediction on sparse datasets, slightly outperformed the GPT- and GCN-based models on the sparse datasets (1.0 to 3.5 p.p. difference in accuracy). However, all three approaches performed similarly on the dense dataset. Given that future use cases will likely involve dense datasets provided by GPS-enabled, always-connected devices (e.g., smartphones), the slight advantage of Flashback on the sparse datasets may become increasingly irrelevant. Given that the performance of the relatively unexplored GPT- and GCN-based solutions was on par with state-of-the-art mobility prediction models, we see significant potential for them to soon surpass today's state-of-the-art approaches.
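The next-place prediction task the abstract describes can be framed minimally: given a visit history, predict the following location. A first-order "most frequent successor" baseline (not the GPT/GCN models of the study; the place names are invented) illustrates the setup such models compete against.

```python
# Most-frequent-successor baseline for next-place prediction.
from collections import defaultdict, Counter

def fit_transitions(history):
    """Count place-to-place transitions in a visit sequence."""
    nxt = defaultdict(Counter)
    for a, b in zip(history, history[1:]):
        nxt[a][b] += 1
    return nxt

def predict_next(nxt, current):
    """Most frequent successor of `current`, or None if unseen."""
    if current not in nxt:
        return None
    return nxt[current].most_common(1)[0][0]

# Hypothetical check-in history for one individual.
visits = ["home", "work", "gym", "home", "work", "cafe", "home", "work", "gym"]
model = fit_transitions(visits)
print(predict_next(model, "work"))  # "gym" follows "work" twice, "cafe" once
```

Sequence models such as transformers improve on this by conditioning on the whole recent history (and its timing) rather than only the current place.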


Subjects
Artificial Intelligence, Electric Power Supplies, Humans, Environmental Pollution, Computer-Assisted Image Processing, Natural Language Processing
4.
Sci Rep; 12(1): 16876, 2022 Oct 7.
Article in English | MEDLINE | ID: mdl-36207524

ABSTRACT

Using a novel wearable surface electromyography (sEMG) device, we investigated induced affective states by measuring the activation of facial muscles traditionally associated with positive expressions (left/right orbicularis and left/right zygomaticus) and negative expressions (the corrugator muscle). In a sample of 38 participants who watched 25 affective videos in a virtual reality environment, we found that sEMG amplitude varied significantly with video content for each of the three variables examined: subjective valence, subjective arousal, and objective valence measured via the validated video types (positive, neutral, and negative). sEMG amplitude from the "positive" muscles increased when participants were exposed to positively valenced stimuli compared with negatively valenced stimuli. In contrast, activation of the "negative" muscle was elevated following exposure to negatively valenced stimuli compared with positively valenced stimuli. High-arousal videos increased muscle activation compared with low-arousal videos in all the measured muscles except the corrugator. In line with previous research, the relationship between sEMG amplitude and subjective valence was V-shaped.


Subjects
Facial Muscles, Wearable Electronic Devices, Affect/physiology, Arousal/physiology, Electromyography, Emotions/physiology, Face/physiology, Facial Expression, Facial Muscles/physiology, Humans
5.
Front Artif Intell; 5: 867046, 2022.
Article in English | MEDLINE | ID: mdl-35837615

ABSTRACT

Human mobility modeling is a complex yet essential subject of study related to modeling important spatiotemporal events, including traffic, disease spreading, and customized directions and recommendations. While spatiotemporal data can be collected easily via smartphones, current state-of-the-art deep learning methods require vast amounts of such privacy-sensitive data to generate useful models. This work investigates the creation of spatiotemporal models using a Federated Learning (FL) approach, a machine learning technique that avoids sharing personal data with centralized servers. More specifically, we examine three centralized models for next-place prediction: a simple Gated Recurrent Unit (GRU) model, as well as two state-of-the-art centralized approaches, Flashback and DeepMove. Flashback is a Recurrent Neural Network (RNN) that utilizes historical hidden states with a context similar to the current spatiotemporal context to improve performance. DeepMove is an attentional RNN that aims to capture the regularity of human mobility while coping with data sparsity. We then implemented FL-based versions of the two best-performing centralized models. We compared the performance of all models using two large public datasets: Foursquare (9,450 million check-ins, February 2009 to October 2010) and Gowalla (3,300 million check-ins, April 2012 to January 2014). We first replicated the performance of both Flashback and DeepMove, as reported in the original studies, and compared them to the simple GRU model. Flashback and GRU proved to be the best-performing centralized models, so we further explored both in FL scenarios, varying parameters such as the number of clients, rounds, and epochs. Our results indicated that the training process of the federated models was less stable, i.e., the FL versions of both Flashback and GRU tended to have higher variability in the loss curves. The higher variability led to slower convergence and thus poorer performance compared with the corresponding centralized models. Model performance was also highly influenced by the number of federated clients and the sparsity of the evaluation dataset. We additionally provide insights into the technical challenges of applying FL to state-of-the-art deep learning methods for human mobility.
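The core privacy mechanism of the FL setup above is federated averaging: clients train locally and send only model parameters, never raw location data, to a server that averages them. A minimal sketch, with plain float lists standing in for network weights:

```python
# FedAvg in miniature: weighted average of client parameter vectors.
def fed_avg(client_weights, client_sizes):
    """Average client parameter vectors, weighted by local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    avg = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            avg[i] += w[i] * n / total
    return avg

# Three hypothetical clients with unequal amounts of local data:
# the client with more data pulls the global model toward its weights.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
print(fed_avg(weights, sizes))  # [3.5, 4.5]
```

The instability the abstract reports arises because each client's update is computed on a small, non-identically-distributed slice of data, so the averaged direction fluctuates between rounds.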

6.
Sensors (Basel); 22(10), 2022 May 10.
Article in English | MEDLINE | ID: mdl-35632022

ABSTRACT

From 2018 to 2021, the Sussex-Huawei Locomotion-Transportation Recognition Challenge presented different scenarios in which participants were tasked with recognizing eight different modes of locomotion and transportation using sensor data from smartphones. In 2019, the main challenge was using sensor data from one location to recognize activities with sensors in another location, while in the following year, the main challenge was using the sensor data of one person to recognize the activities of other persons. We use these two challenge scenarios as a framework in which to analyze the effectiveness of different components of a machine-learning pipeline for activity recognition. We show that: (i) selecting an appropriate (location-specific) portion of the available data for training can improve the F1 score by up to 10 percentage points (p.p.) compared to a more naive approach; (ii) separate models for human locomotion and for transportation in vehicles can yield an increase of roughly 1 p.p.; (iii) using semi-supervised learning can, again, yield an increase of roughly 1 p.p.; and (iv) temporal smoothing of predictions with Hidden Markov Models, when applicable, can bring an improvement of almost 10 p.p. Our experiments also indicate that the usefulness of advanced feature selection techniques and of clustering to create person-specific models is inconclusive and should be explored separately in each use case.
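The HMM temporal smoothing in point (iv) can be sketched concretely: a "sticky" transition matrix penalizes implausibly frequent activity switches, and Viterbi decoding picks the most likely smooth sequence. The two-activity toy below (state 0 = walk, 1 = car) and its probabilities are invented for illustration, not taken from the challenge pipeline.

```python
# Viterbi smoothing of per-window classifier confidences with a sticky HMM.
import math

def viterbi_smooth(obs_probs, stay=0.9):
    """obs_probs[t][s]: classifier confidence for state s at window t."""
    n_states = len(obs_probs[0])
    trans = [[stay if i == j else (1 - stay) / (n_states - 1)
              for j in range(n_states)] for i in range(n_states)]
    # Log-domain Viterbi with a uniform initial distribution.
    score = [math.log(p) for p in obs_probs[0]]
    back = []
    for t in range(1, len(obs_probs)):
        new, ptr = [], []
        for s in range(n_states):
            best_prev = max(range(n_states),
                            key=lambda p: score[p] + math.log(trans[p][s]))
            new.append(score[best_prev] + math.log(trans[best_prev][s])
                       + math.log(obs_probs[t][s]))
            ptr.append(best_prev)
        score, back = new, back + [ptr]
    # Backtrack the best path.
    path = [max(range(n_states), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# One noisy "car" blip inside a walking segment gets smoothed away.
probs = [[0.9, 0.1], [0.8, 0.2], [0.4, 0.6], [0.85, 0.15], [0.9, 0.1]]
print(viterbi_smooth(probs))  # [0, 0, 0, 0, 0]
```

Taking the raw per-window argmax instead would label the middle window as "car"; the sticky transitions make a single-window switch cost more likelihood than it gains.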


Subjects
Algorithms, Supervised Machine Learning, Humans, Locomotion, Machine Learning, Smartphone
7.
Sensors (Basel); 22(6), 2022 Mar 8.
Article in English | MEDLINE | ID: mdl-35336250

ABSTRACT

Breathing rate is considered one of the fundamental vital signs and a highly informative indicator of physiological state. Given that monitoring heart activity is less complex than monitoring breathing, a variety of algorithms have been developed to estimate breathing activity from heart activity. However, estimating breathing rate from heart activity outside of laboratory conditions remains a challenge. The challenge is even greater when new wearable devices with novel sensor placements are used. In this paper, we present a novel algorithm for breathing rate estimation from photoplethysmography (PPG) data acquired from a head-worn virtual reality mask equipped with a PPG sensor placed on the subject's forehead. The algorithm is based on advanced signal processing and machine learning techniques and includes a novel quality assessment and motion-artifact removal procedure. The proposed algorithm is evaluated and compared to existing approaches from the related work using two separate datasets containing data from a total of 37 subjects. Numerous experiments show that the proposed algorithm outperforms the compared algorithms, achieving a mean absolute error of 1.38 breaths per minute and a Pearson's correlation coefficient of 0.86. These results indicate that reliable estimation of breathing rate is possible based on PPG data acquired from a head-worn device.
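The paper's pipeline (quality assessment, artifact removal, machine learning) is far richer than this, but the final step of reading a breathing rate off a clean respiratory waveform can be sketched: count positive-going zero crossings of a mean-removed signal over a known duration. The synthetic signal below is an assumption standing in for a PPG-derived respiratory trace.

```python
# Zero-crossing breathing-rate estimate on an oscillatory signal.
import math

def breathing_rate_bpm(signal, fs):
    """Estimate breaths/min from a respiratory signal sampled at fs Hz."""
    mean = sum(signal) / len(signal)
    centered = [x - mean for x in signal]
    # Count transitions from below the baseline to at-or-above it.
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a < 0 <= b)
    duration_min = len(signal) / fs / 60.0
    return crossings / duration_min

# Synthetic respiratory wave: 0.25 Hz sinusoid (15 breaths/min), 60 s at 25 Hz.
fs = 25
sig = [math.sin(2 * math.pi * 0.25 * t / fs - math.pi / 2)
       for t in range(fs * 60)]
print(breathing_rate_bpm(sig, fs))  # 15.0
```

Real PPG-derived signals need the band-pass filtering and artifact rejection the abstract describes before such a simple counter becomes reliable.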


Subjects
Photoplethysmography, Respiratory Rate, Heart Rate/physiology, Humans, Machine Learning, Photoplethysmography/methods, Computer-Assisted Signal Processing
8.
Article in English | MEDLINE | ID: mdl-34201618

ABSTRACT

The COVID-19 pandemic affected the whole world, but not all countries were impacted equally. This raises the question of what factors can explain the initially faster spread in some countries compared to others. Many such factors are overshadowed by the effect of the countermeasures, so we studied the early phases of the infection, before countermeasures had taken effect. For this task, we collected the most diverse dataset of potentially relevant factors and infection metrics to date. Using it, we show the importance of different factors and factor categories as determined by both statistical methods and machine learning (ML) feature selection (FS) approaches. Factors related to culture (e.g., individualism, openness), development, and travel proved the most important. A more thorough factor analysis was then made using a novel rule-discovery algorithm. We also show how interconnected these factors are and caution against relying on ML analysis in isolation. Importantly, we explore potential pitfalls in the methodology of similar work and demonstrate their impact on COVID-19 data analysis. Our best models, using the decision tree classifier, can predict the infection class with roughly 80% accuracy.
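A minimal stand-in for the feature-importance step described above is ranking candidate country-level factors by absolute Pearson correlation with an infection metric. The factor names and all numbers below are invented purely for illustration, not results from the study.

```python
# Rank hypothetical country-level factors by |Pearson r| with a target metric.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_features(features, target):
    """Sort feature names by |r| against the target, strongest first."""
    return sorted(features, key=lambda k: -abs(pearson(features[k], target)))

# Made-up factors for six countries vs. an early growth-rate metric.
factors = {
    "individualism": [20, 35, 50, 65, 80, 90],
    "air_travel":    [1, 3, 2, 6, 5, 9],
    "median_age":    [30, 25, 40, 28, 35, 35],
}
growth = [0.5, 1.0, 1.6, 2.1, 2.6, 3.0]
print(rank_features(factors, growth))
```

The abstract's caution applies directly here: correlated factors can share their apparent importance, which is why the study cross-checks ML feature selection against statistical methods and rule discovery.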


Subjects
COVID-19, Algorithms, Humans, Machine Learning, Pandemics, SARS-CoV-2
9.
Sensors (Basel); 20(22), 2020 Nov 16.
Article in English | MEDLINE | ID: mdl-33207564

ABSTRACT

To further extend the applicability of wearable sensors in various domains, such as mobile health systems and the automotive industry, new methods for accurately extracting subtle physiological information from these wearable sensors are required. However, the extraction of valuable information from physiological signals is still challenging: smartphones can count steps and compute heart rate, but they cannot recognize emotions and related affective states. This study analyzes the possibility of using end-to-end multimodal deep learning (DL) methods for affect recognition. Ten end-to-end DL architectures are compared on four different datasets with diverse raw physiological signals used for affect recognition, including emotional and stress states. The DL architectures specialized for time-series classification were enhanced to simultaneously facilitate learning from multiple sensors, each with its own sampling frequency. To enable a fair comparison among the different DL architectures, Bayesian optimization was used for hyperparameter tuning. The experimental results showed that the performance of the models depends on the intensity of the physiological response induced by the affective stimuli, i.e., the DL models recognize stress induced by the Trier Social Stress Test more successfully than they recognize emotional changes induced by watching affective content, e.g., funny videos. Additionally, the results showed that CNN-based architectures may be more suitable than LSTM-based architectures for affect recognition from physiological sensors.
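One practical detail the abstract raises is fusing sensors with different sampling frequencies. A common preprocessing alternative to the paper's learned per-sensor handling is to resample every channel onto a shared time grid; a linear-interpolation resampler is sketched below as an assumption-laden stand-in.

```python
# Linearly interpolate a uniformly sampled channel onto a new sampling rate.
def resample(samples, fs_in, fs_out, duration):
    """Resample `samples` (recorded at fs_in Hz) to fs_out Hz."""
    out = []
    n_out = int(duration * fs_out)
    for k in range(n_out):
        t = k / fs_out                     # target timestamp in seconds
        pos = t * fs_in                    # fractional index in the input
        i = min(int(pos), len(samples) - 2)
        frac = min(pos - i, 1.0)           # clamp at the final sample
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
    return out

# A hypothetical 4 Hz channel upsampled to 8 Hz over 1 s.
eda = [0.0, 1.0, 2.0, 3.0]
print(resample(eda, fs_in=4, fs_out=8, duration=1))
```

After resampling, channels recorded at different rates line up sample-for-sample and can be stacked into one multimodal input tensor.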


Subjects
Affect, Deep Learning, Emotions, Physiological Monitoring, Bayes Theorem, Heart Rate, Humans, Automated Pattern Recognition
10.
Sensors (Basel); 18(4), 2018 Apr 11.
Article in English | MEDLINE | ID: mdl-29641430

ABSTRACT

BACKGROUND: Blood pressure (BP) measurements are used widely in clinical and private environments. Recently, the use of ECG monitors has proliferated; however, they do not offer BP estimation. We have developed a method for BP estimation using only electrocardiogram (ECG) signals. METHODS: Raw ECG data are filtered and segmented, after which a complexity analysis is performed for feature extraction. Then, a machine-learning method is applied, combining a stacking-based classification module and a regression module, to build systolic BP (SBP), diastolic BP (DBP), and mean arterial pressure (MAP) predictive models. In addition, the method allows a probability distribution-based calibration to adapt the models to a particular user. RESULTS: Using ECG recordings from 51 different subjects, 3129 30-s ECG segments were constructed and seven features extracted. Using a train-validation-test evaluation, the method achieves a mean absolute error (MAE) of 8.64 mmHg for SBP, 18.20 mmHg for DBP, and 13.52 mmHg for MAP prediction. When the models are calibrated, the MAE decreases to 7.72 mmHg for SBP, 9.45 mmHg for DBP, and 8.13 mmHg for MAP. CONCLUSION: The experimental results indicate that, when a probability distribution-based calibration is used, the proposed method can achieve results close to those of a certified medical device for BP estimation.
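The reported gains from calibration can be illustrated with the simplest possible per-user correction: shift a population model's BP predictions by the mean signed error observed on a few calibration readings from that user. The paper's probability distribution-based calibration is more elaborate, and all numbers below are invented.

```python
# Per-user bias calibration of BP predictions, with MAE as the metric.
def mae(pred, true):
    """Mean absolute error between predictions and reference readings."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(pred)

def calibrate(pred, calib_pred, calib_true):
    """Subtract the user's mean signed error seen on calibration samples."""
    bias = sum(p - t for p, t in zip(calib_pred, calib_true)) / len(calib_pred)
    return [p - bias for p in pred]

# Hypothetical user whose SBP the population model overestimates by ~8 mmHg.
true_sbp = [118, 122, 125, 130, 135]
raw_pred = [127, 129, 134, 137, 144]
adj_pred = calibrate(raw_pred, calib_pred=raw_pred[:2], calib_true=true_sbp[:2])
print(round(mae(raw_pred, true_sbp), 2), round(mae(adj_pred, true_sbp), 2))
# 8.2 1.0
```

Removing the systematic per-user offset is exactly why calibrated MAEs in the abstract drop so sharply for DBP and MAP, where the uncalibrated bias is largest.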


Subjects
Blood Pressure, Blood Pressure Determination, Calibration, Electrocardiography, Humans, Machine Learning
11.
J Biomed Inform; 73: 159-170, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28803947

ABSTRACT

Being able to detect stress as it occurs can greatly contribute to dealing with its negative health and economic consequences. However, detecting stress in real life with an unobtrusive wrist device is a challenging task. The objective of this study is to develop a method for stress detection that can accurately, continuously, and unobtrusively monitor psychological stress in real life. First, we explore the problem of stress detection using machine learning and signal processing techniques in laboratory conditions, and then we apply the extracted laboratory knowledge to real-life data. We propose a novel context-based stress-detection method consisting of three machine-learning components: a laboratory stress detector that is trained on laboratory data and detects short-term stress every 2 min; an activity recognizer that continuously recognizes the user's activity and thus provides context information; and a context-based stress detector that uses the outputs of the laboratory stress detector, the activity recognizer, and other contexts to provide the final decision on 20-min intervals. Experiments on 55 days of real-life data showed that the method detects (recalls) 70% of the stress events with a precision of 95%.
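The third component described above aggregates ten 2-min detector outputs plus context into one 20-min decision. A toy version of that aggregation step: count the short-term stress votes, then suppress the alarm when the recognized activity plausibly explains the physiology (e.g., exercise raises arousal without psychological stress). The threshold and the activity rule are illustrative assumptions, not the paper's learned model.

```python
# Context-aware aggregation of short-term stress detections.
def stress_in_interval(short_term_votes, activity, threshold=0.5):
    """short_term_votes: ten 0/1 outputs of the 2-min lab detector."""
    vote_ratio = sum(short_term_votes) / len(short_term_votes)
    if activity == "exercise":  # context overrides raw physiology
        return False
    return vote_ratio >= threshold

votes = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]      # 8 of 10 windows look stressed
print(stress_in_interval(votes, "office"))    # True
print(stress_in_interval(votes, "exercise"))  # False
```

This context gating is what lets the method keep precision high in real life, where exercise and other activities mimic the physiological signature of stress.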


Subjects
Machine Learning, Physiological Monitoring, Computer-Assisted Signal Processing, Psychological Stress, Humans, Life Change Events, Wrist
12.
Sensors (Basel); 16(6), 2016 Jun 1.
Article in English | MEDLINE | ID: mdl-27258282

ABSTRACT

Although wearable accelerometers can successfully recognize activities and detect falls, their adoption in real life is low because users do not want to wear additional devices. A possible solution is an accelerometer inside a wrist device/smartwatch. However, wrist placement might perform poorly in terms of accuracy due to frequent random movements of the hand. In this paper, we perform a thorough, large-scale evaluation of methods for activity recognition and fall detection on four datasets. On the first two, we showed that the left wrist performs better than the dominant right one, and also better than the elbow and the chest, but worse than the ankle, knee, and belt. On the third (Opportunity) dataset, our method outperformed the related work, indicating that our feature preprocessing creates better input data. Finally, on a real-life unlabeled dataset, the recognized activities captured the subject's daily rhythm and activities. Our fall-detection method detected all of the fast falls and minimized the false positives, achieving 85% accuracy on the first dataset. Because the other datasets did not contain fall events, only false positives were evaluated, resulting in 9 for the second, 1 for the third, and 15 for the real-life dataset (57 days of data).
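The core signal behind accelerometer fall detection is a large spike in acceleration magnitude at impact, often preceded by a free-fall dip. The sketch below checks only the impact-spike condition; the 3 g threshold and the sample values are illustrative assumptions, not the paper's tuned method.

```python
# Threshold-based impact detection on triaxial accelerometer samples (in g).
def magnitude(ax, ay, az):
    """Euclidean norm of one acceleration sample."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def detect_fall(samples, impact_g=3.0):
    """True if any sample's magnitude exceeds the impact threshold."""
    return any(magnitude(*s) > impact_g for s in samples)

# Walking hovers near 1 g; a fall shows a free-fall dip then a hard impact.
walking = [(0.1, 0.2, 1.0), (0.3, 0.1, 1.1), (0.2, 0.2, 0.9)]
fall = walking + [(0.0, 0.1, 0.2), (2.5, 1.5, 2.8)]
print(detect_fall(walking), detect_fall(fall))  # False True
```

Real methods, including the one above, add pattern checks around the spike (free-fall phase, post-impact stillness) precisely to keep the false-positive counts as low as those reported.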


Subjects
Accelerometry/instrumentation, Accidental Falls/prevention & control, Physiological Monitoring/instrumentation, Activities of Daily Living, Algorithms, Humans, Wearable Electronic Devices, Wrist/physiology